
    Linear optimization on modern GPUs

    Abstract Optimization algorithms are becoming increasingly important in many areas, such as finance and engineering. Typically, real problems involve several hundred variables and are subject to as many constraints. Several methods have been developed to reduce the theoretical time complexity; nevertheless, when problems exceed reasonable sizes, they become very computationally intensive. Heterogeneous systems built by coupling commodity CPUs and GPUs are becoming relatively cheap, high-performing systems, and recent developments in GPGPU technologies give even more powerful control over them. In this paper, we show how we use the revised simplex algorithm, originally described by Dantzig, for solving linear programming problems in both our CPU and GPU implementations. Previously, this approach had been shown not to scale beyond around 200 variables. However, by taking advantage of modern libraries such as ATLAS for matrix-matrix multiplication and the NVIDIA CUDA programming library on recent GPUs, we show that we can scale to problem sizes of at least 2000 variables in our experiments on both architectures. On the GPU, we also achieve appreciable precision on large problems with thousands of variables and constraints, while achieving between 2x and 2.5x speed-ups over the serial ATLAS-based CPU version. With further tuning of both the algorithm and its implementations, even better results should be achievable for both the CPU and GPU versions.
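    To make the algorithmic core of this abstract concrete, the following is a minimal dense revised simplex sketch in NumPy. It is not the paper's implementation (which targets ATLAS and CUDA); it only illustrates why the method maps well onto matrix libraries: each iteration is dominated by linear solves and matrix-vector products against the basis matrix. The function name, tolerances, and the assumption of a known starting basis are all illustrative choices, not details from the paper.

    ```python
    import numpy as np

    def revised_simplex(c, A, b, basis):
        """Minimal revised simplex for  max c^T x  s.t.  A x = b, x >= 0.

        `basis` holds column indices of an initial feasible basis (e.g. the
        slack columns). Each iteration needs only solves against the basis
        matrix B, which is why BLAS-style matrix kernels dominate the cost.
        """
        m, n = A.shape
        while True:
            B = A[:, basis]
            x_B = np.linalg.solve(B, b)             # current basic solution
            y = np.linalg.solve(B.T, c[basis])      # simplex multipliers
            reduced = c - A.T @ y                   # reduced costs
            entering = int(np.argmax(reduced))
            if reduced[entering] <= 1e-9:           # no improving column: optimal
                x = np.zeros(n)
                x[basis] = x_B
                return x, c @ x
            d = np.linalg.solve(B, A[:, entering])  # edge direction
            ratios = np.where(d > 1e-9, x_B / d, np.inf)
            leaving = int(np.argmin(ratios))        # ratio test
            if not np.isfinite(ratios[leaving]):
                raise ValueError("problem is unbounded")
            basis[leaving] = entering

    # Maximize 3x + 2y  s.t.  x + y <= 4, x + 3y <= 6  (slacks give the basis).
    c = np.array([3.0, 2.0, 0.0, 0.0])
    A = np.array([[1.0, 1.0, 1.0, 0.0],
                  [1.0, 3.0, 0.0, 1.0]])
    b = np.array([4.0, 6.0])
    x, obj = revised_simplex(c, A, b, basis=[2, 3])
    assert abs(obj - 12.0) < 1e-9
    ```

    In the paper's setting, the `solve` and product steps are where ATLAS (CPU) or CUDA kernels (GPU) would be substituted for the NumPy calls.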

    Automatic Generation of Efficient Linear Algebra Programs

    The level of abstraction at which application experts reason about linear algebra computations and the level of abstraction used by developers of high-performance numerical linear algebra libraries do not match. The former is conveniently captured by high-level languages and libraries such as Matlab and Eigen, while the latter expresses the kernels included in the BLAS and LAPACK libraries. Unfortunately, the translation from a high-level computation to an efficient sequence of kernels is a far-from-trivial task that requires extensive knowledge of both linear algebra and high-performance computing. Internally, almost all high-level languages and libraries use efficient kernels; however, the translation algorithms are too simplistic and thus lead to a suboptimal use of said kernels, with significant performance losses. In order to both achieve the productivity that comes with high-level languages and make use of the efficiency of low-level kernels, we are developing Linnea, a code generator for linear algebra problems. As input, Linnea takes a high-level description of a linear algebra problem and produces as output an efficient sequence of calls to high-performance kernels. In 25 application problems, the code generated by Linnea always outperforms Matlab, Julia, Eigen and Armadillo, with speedups up to and exceeding 10x.
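    The abstraction gap this abstract describes can be illustrated with a small sketch. The example below is not Linnea's output; it simply contrasts a high-level formulation of least squares (explicit matrix inverse, as an application expert might write it) with the kind of kernel-level rewrite such a tool targets (a Cholesky factorization and triangular solves, here via SciPy's wrappers over LAPACK). The problem sizes and random data are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 5))
    y = rng.standard_normal(100)

    # High-level formulation, as an application expert would write it:
    b_naive = np.linalg.inv(X.T @ X) @ X.T @ y

    # Kernel-level rewrite a generator like Linnea aims for: X^T X is
    # symmetric positive definite, so a Cholesky factorization plus
    # triangular solves (LAPACK potrf/potrs) replaces the explicit inverse.
    G = X.T @ X           # Gram matrix (a syrk-style kernel)
    rhs = X.T @ y         # matrix-vector product (gemv)
    b_kernel = cho_solve(cho_factor(G), rhs)

    assert np.allclose(b_naive, b_kernel)
    ```

    The two formulations compute the same result, but the second avoids forming an explicit inverse, which is both cheaper and numerically better behaved.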

    Automatically Harnessing Sparse Acceleration

    Sparse linear algebra is central to many scientific programs, yet compilers fail to optimize it well. High-performance libraries are available, but adoption costs are significant. Moreover, libraries tie programs into vendor-specific software and hardware ecosystems, creating non-portable code. In this paper, we develop a new approach based on our specification Language for implementers of Linear Algebra Computations (LiLAC). Rather than requiring the application developer to (re)write every program for a given library, the burden is shifted to a one-off description by the library implementer. The LiLAC-enabled compiler uses this to insert appropriate library routines without source code changes. LiLAC provides automatic data marshaling, maintaining state between calls and minimizing data transfers. Appropriate places for library insertion are detected in compiler intermediate representation, independent of source languages. We evaluated on large-scale scientific applications written in FORTRAN; standard C/C++ and FORTRAN benchmarks; and C++ graph analytics kernels. Across heterogeneous platforms, applications and data sets, we show speedups of 1.1× to over 10× without user intervention.
    Comment: Accepted to CC 202
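    The substitution LiLAC performs can be sketched at a high level. The example below is not LiLAC itself (which works on compiler intermediate representation, not Python): it merely shows the kind of hand-rolled sparse kernel such a compiler would recognize in application code, next to the optimized library call it would substitute, with `scipy.sparse` standing in for a vendor kernel. All names and data here are illustrative.

    ```python
    import numpy as np
    import scipy.sparse as sp

    def spmv_naive(rows, cols, vals, x, m):
        """Hand-rolled COO sparse matrix-vector product, as it might appear
        in application code that a LiLAC-style compiler would recognize."""
        y = np.zeros(m)
        for r, c, v in zip(rows, cols, vals):
            y[r] += v * x[c]
        return y

    rows = np.array([0, 0, 1, 2])
    cols = np.array([0, 2, 1, 2])
    vals = np.array([1.0, 2.0, 3.0, 4.0])
    x = np.array([1.0, 2.0, 3.0])

    y_naive = spmv_naive(rows, cols, vals, x, m=3)

    # The substituted equivalent: the same computation through an optimized
    # library routine, inserted without touching the original source.
    A = sp.coo_matrix((vals, (rows, cols)), shape=(3, 3))
    y_lib = A @ x

    assert np.allclose(y_naive, y_lib)
    ```

    The key point of the paper is that this recognition-and-replacement happens automatically in the compiler, so the application source keeps the naive loop while the binary runs the library kernel.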

    Notulae to the Italian native vascular flora: 8

    In this contribution, new data concerning the distribution of the native vascular flora in Italy are presented. They include new records, confirmations, exclusions, and status changes for the Italian administrative regions for taxa in the genera Ajuga, Chamaemelum, Clematis, Convolvulus, Cytisus, Deschampsia, Eleocharis, Epipactis, Euphorbia, Groenlandia, Hedera, Hieracium, Hydrocharis, Jacobaea, Juncus, Klasea, Lagurus, Leersia, Linum, Nerium, Onopordum, Persicaria, Phlomis, Polypogon, Potamogeton, Securigera, Sedum, Soleirolia, Stachys, Umbilicus, Valerianella, and Vinca. Nomenclatural and distribution updates published elsewhere, together with corrigenda, are provided as Suppl. material 1.

    SISV support for the development of a national manual for monitoring the habitats of Directive 92/43/EEC in Italy.

    Since the entry into force of Directive 92/43/EEC, surveillance of the conservation status of the habitats listed in Annex I, together with periodic monitoring at six-year intervals, has been an obligation for all EU member states under Articles 11 and 17. In 2011, a document providing the European reference guidelines for monitoring habitats and species was published (Evans & Arvela 2011). On this methodological basis, the Società Italiana di Scienza della Vegetazione (SISV), drawing on a large group of expert members, launched an internal debate on principles, criteria, parameters, and tools for monitoring the Annex I habitats and the vegetation types they comprise. The project was promoted by the Ministero dell'Ambiente e della Tutela del Territorio e del Mare, coordinated by the Istituto Superiore per la Protezione e la Ricerca Ambientale, and is currently nearing completion. Starting from the documentation already produced at the national level for Italian habitats (Biondi et al., 2009, 2012, 2014; Genovesi et al., 2014), several critical aspects were examined through a broadly shared scientific discussion. In particular, the following were addressed: the choice of suitable tools for assessing the parameters of area, structure and function, and future prospects; the concept of "typical species"; and appropriate habitat-specific sampling methods. The resulting protocol is intended as a practical and effective tool, scientifically sound and in line with international methodological standards. Its use will enable harmonized data collection at the national scale, making a comparative assessment of the conservation status of each habitat possible.